
    EEG Correlates of Learning From Speech Presented in Environmental Noise

    How the human brain retains relevant vocal information while suppressing irrelevant sounds is one of the ongoing challenges in cognitive neuroscience. Knowledge of the underlying mechanisms of this ability can be used to identify whether a person is distracted while listening to target speech, especially in a learning context. This paper investigates the neural correlates of learning from speech presented in a noisy environment, using an ecologically valid learning context and electroencephalography (EEG). To this end, the following listening tasks were performed while 64-channel EEG signals were recorded: (1) attentive listening to lectures in background sound, (2) attentive listening to the background sound presented alone, and (3) inattentive listening to the background sound. For the first task, 13 lectures of 5 min in length, embedded in different types of realistic background noise, were presented to participants, who were asked to focus on the lectures. Multi-talker babble, continuous highway, and fluctuating traffic sounds served as background noise. After the second task, a written exam was taken to quantify the amount of information that participants had acquired and retained from the lectures. In addition to various power-spectrum-based EEG features in different frequency bands, the peak frequency and long-range temporal correlations (LRTC) of alpha-band activity were estimated. To reduce these dimensions, a principal component analysis (PCA) was applied to the different listening conditions, yielding the feature combinations that discriminate most between listening conditions and persons. Linear mixed-effects modeling was used to explain the origin of the extracted principal components, showing their dependence on listening condition and type of background sound. Following this unsupervised step, a supervised analysis was performed to explain the link between the exam results and the EEG principal component scores using both linear fixed- and mixed-effects modeling. Results suggest that the ability to learn from speech presented in environmental noise can be predicted better by several components over specific brain regions than by knowing the background noise type. These components were linked to deterioration in attention, speech envelope following, decreased focus during listening, cognitive prediction error, and specific inhibition mechanisms.
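
    A minimal sketch of the two-stage analysis described in this abstract, under loose assumptions: PCA over standardized EEG features, followed by a linear mixed-effects model relating exam scores to principal component scores with a random intercept per participant. The feature matrix, exam scores, and grouping structure below are simulated placeholders, not the authors' data or code.

    import numpy as np
    import pandas as pd
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)

    # Hypothetical feature matrix: rows = participant x listening block,
    # columns = band powers, alpha peak frequency, alpha LRTC exponent, etc.
    n_obs, n_feat = 120, 20
    X = rng.normal(size=(n_obs, n_feat))

    # Standardize features, then reduce dimensionality with PCA.
    pca = PCA(n_components=5)
    scores = pca.fit_transform(StandardScaler().fit_transform(X))

    df = pd.DataFrame(scores, columns=[f"PC{i+1}" for i in range(5)])
    df["exam"] = rng.normal(60, 10, n_obs)        # hypothetical exam results
    df["subject"] = np.repeat(np.arange(10), 12)  # 10 participants x 12 blocks

    # Mixed-effects model: exam score explained by PC scores,
    # with a random intercept per participant.
    model = smf.mixedlm("exam ~ PC1 + PC2 + PC3", df, groups=df["subject"]).fit()
    print(model.summary())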

    Sound Localization in Single-Sided Deaf Participants Provided With a Cochlear Implant

    Spatial hearing is crucial in real life but deteriorates in participants with severe sensorineural hearing loss or single-sided deafness. This ability can potentially be improved with a unilateral cochlear implant (CI). The present study investigated measures of sound localization in participants with single-sided deafness provided with a CI. Sound localization was measured at eight loudspeaker positions (4°, 30°, 60°, and 90° on the CI side and on the normal-hearing side). Low- and high-frequency noise bursts were used in the tests to investigate possible differences in the processing of interaural time and level differences. Data were compared to those of normal-hearing adults aged between 20 and 83 years. In addition, the benefit of the CI for speech understanding in noise was compared to the localization ability. Fifteen out of 18 participants were able to localize signals on the CI side and on the normal-hearing side, although performance was highly variable across participants. Three participants always pointed to the normal-hearing side, irrespective of the location of the signal. The comparison with control data showed that participants had particular difficulties localizing sounds at frontal locations and on the CI side. In contrast to most previous results, participants were able to localize low-frequency signals, although they localized high-frequency signals more accurately. Speech understanding in noise was better with the CI than without it, but only at a position where the CI also improved sound localization. Our data suggest that a CI can, to a large extent, restore localization in participants with single-sided deafness. Difficulties may remain at frontal locations and on the CI side. However, speech understanding in noise improves when wearing the CI. Treatment with a CI in these participants might provide real-world benefits, such as improved orientation in traffic and speech understanding in difficult listening situations.
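
    One plausible way to quantify the localization ability described above is the root-mean-square (RMS) error between target azimuth and pointing response; the sketch below assumes simulated responses at the eight loudspeaker positions, since the study's exact scoring procedure is not given here.

    import numpy as np

    rng = np.random.default_rng(1)

    # Eight azimuths: +/-4, +/-30, +/-60, +/-90 degrees (CI side vs. NH side).
    targets = np.array([-90, -60, -30, -4, 4, 30, 60, 90], dtype=float)

    # Simulated pointing responses, repeated over trials.
    n_reps = 10
    target_trials = np.tile(targets, n_reps)
    responses = target_trials + rng.normal(0, 15, target_trials.size)

    rms_error = np.sqrt(np.mean((responses - target_trials) ** 2))
    print(f"RMS localization error: {rms_error:.1f} deg")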

Selective attention modulates human auditory brainstem responses: relative contributions of frequency and spatial cues

    Selective attention is the mechanism that allows one to focus on a particular stimulus, for instance a single conversation in a noisy room, while filtering out a range of other stimuli. Attending to one sound source rather than another changes activity in the human auditory cortex, but it is unclear whether attention to different acoustic features, such as voice pitch and speaker location, modulates subcortical activity. Studies using a dichotic listening paradigm indicated that auditory brainstem processing may be modulated by the direction of attention. We investigated whether endogenous selective attention to one of two speech signals affects amplitude and phase locking in auditory brainstem responses when the signals were discriminable either by frequency content alone, or by frequency content and spatial location. Frequency-following responses to the speech sounds were significantly modulated in both conditions. The modulation was specific to the task-relevant frequency band. The effect was stronger when both frequency and spatial information were available. Patterns of response were variable between participants and were correlated with psychophysical discriminability of the stimuli, suggesting that the modulation was biologically relevant. Our results demonstrate that auditory brainstem responses are susceptible to efferent modulation related to behavioral goals. Furthermore, they suggest that mechanisms of selective attention actively shape activity at early subcortical processing stages according to task relevance and based on frequency and spatial cues.
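
    A hedged illustration of how phase-locked FFR power at a voice's fundamental frequency could be measured: average across epochs to retain phase-locked activity, then read off the spectral amplitude at the bin nearest F0. The sampling rate, epoch length, and signal below are assumptions for illustration.

    import numpy as np

    fs = 5000                       # sampling rate (Hz), assumed
    t = np.arange(0, 0.3, 1 / fs)   # one 300-ms epoch
    f0 = 170.0                      # fundamental of the attended voice

    # Synthetic epochs: weak F0-locked component buried in noise.
    rng = np.random.default_rng(2)
    epochs = 0.1 * np.sin(2 * np.pi * f0 * t) + rng.normal(0, 1, (200, t.size))

    # Averaging across epochs retains phase-locked activity, attenuates noise.
    ffr = epochs.mean(axis=0)

    # Spectral amplitude at the bin closest to F0.
    spectrum = np.abs(np.fft.rfft(ffr)) / t.size
    freqs = np.fft.rfftfreq(t.size, 1 / fs)
    print(f"FFR amplitude at {f0} Hz: {spectrum[np.argmin(np.abs(freqs - f0))]:.4f}")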

    The experimental paradigm.

    (A) The experiment was composed of eight five-minute blocks. The gender of the voice to be attended in the first block (Attend A) was counterbalanced across subjects, and it alternated in the subsequent blocks. During a given block, male and female standard vowel sounds were presented synchronously every 300 ms, and participants had to detect infrequent deviant sounds in the attended voice. Only standard sounds were used in the analysis. (B) Spectra of the standard sounds: on the left, the male standard sound with an F0 of 170 Hz; on the right, the female sound with an F0 of 225 Hz.

    Selective attention modulates brainstem response at individual level, both in dichotic and diotic presentation.

    Normalized spectral power at the fundamental frequencies of the male (A,B) and female stimuli (C,D) for each participant in the dichotic (A,C) and diotic (B,D) condition. White bars indicate the response amplitude while attending to the female voice; grey bars indicate the response amplitude while attending to the male voice. Error bars indicate the standard error of the mean calculated by bootstrap resampling. Stars indicate a significant difference of two standard errors of the mean (p < 0.05) between attending to the male and female voice.
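
    The bootstrap standard error of the mean mentioned in this caption can be sketched as follows: resample the per-epoch amplitudes with replacement, recompute the mean for each resample, and take the standard deviation across resamples. The amplitude values here are simulated.

    import numpy as np

    rng = np.random.default_rng(3)
    amplitudes = rng.gamma(2.0, 0.5, size=200)  # hypothetical per-epoch F0 power

    n_boot = 2000
    boot_means = np.array([
        rng.choice(amplitudes, size=amplitudes.size, replace=True).mean()
        for _ in range(n_boot)
    ])
    sem = boot_means.std(ddof=1)
    print(f"mean = {amplitudes.mean():.3f}, bootstrap SEM = {sem:.3f}")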

    Frequency-following response reflects neural phase-locking to the fundamental frequency of male and female voice.

    Grand average brainstem frequency-following response (FFR) for each experimental condition, plotted in the time (upper panel) and frequency (lower panel) domain. Red lines indicate the response while attending to the female voice; black lines indicate the response while attending to the male voice. FFRs obtained during dichotic presentation are shown on the left, those during diotic presentation on the right. Both vowels’ fundamental frequencies (170 Hz male, 225 Hz female) yielded clearly identifiable maxima in the individual and average FFR spectra.

    Attentional modulation of the brainstem response correlates with behaviour.

    The individual neural attentional modulation indices and target discriminability measures (d′) are negatively correlated in both the dichotic and diotic conditions (r = −.56, p = 0.037 and r = −.58, p = 0.031, respectively). Each triangle represents data from one participant in either the dichotic (solid triangle) or the diotic (hollow triangle) condition. Participants correspond to numbers 1, 5, 7, 9, 12, 13, and 14 as shown in Fig. 4.
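
    A rough sketch of the behavioral analysis behind this correlation, with simulated data: compute d′ from hit and false-alarm rates via the inverse normal CDF, then correlate a hypothetical attentional modulation index with d′ across participants. The study's exact definition of the modulation index may differ.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    n_subj = 14

    # Clip rates away from 0 and 1 so the inverse normal CDF stays finite.
    hit_rate = np.clip(rng.uniform(0.6, 0.95, n_subj), 0.01, 0.99)
    fa_rate = np.clip(rng.uniform(0.05, 0.3, n_subj), 0.01, 0.99)
    d_prime = stats.norm.ppf(hit_rate) - stats.norm.ppf(fa_rate)

    # Hypothetical modulation index (attended minus unattended F0 power).
    modulation = -0.5 * d_prime + rng.normal(0, 0.5, n_subj)

    r, p = stats.pearsonr(modulation, d_prime)
    print(f"r = {r:.2f}, p = {p:.3f}")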